Ijraset Journal For Research in Applied Science and Engineering Technology
Authors: Gulshan Kumar, Dr. Kokila S
DOI Link: https://doi.org/10.22214/ijraset.2024.58035
Satellite images provide a wealth of information about the Earth's surface, but extracting meaningful insights from these vast datasets requires sophisticated analysis techniques. Deep learning models have emerged as powerful tools for multi-class satellite image classification and prediction, offering superior accuracy and efficiency compared to traditional methods. This comprehensive review delves into the latest advancements in deep learning architectures specifically designed for satellite image processing. It critically examines the performance of various convolutional neural network (CNN) architectures, exploring their strengths and limitations in handling diverse satellite image datasets. The review further investigates the integration of transfer learning and domain adaptation techniques to address challenges like limited training data and domain shift. Additionally, it sheds light on recent developments in the interpretability and explainability of deep learning models for satellite image analysis, enabling users to gain a deeper understanding of the decision-making process behind predictions. By providing a holistic overview of the current landscape and future directions, this review serves as a valuable resource for researchers and practitioners working in the field of satellite image analysis and deep learning.
I. INTRODUCTION
Satellite images have proven to be an indispensable asset across diverse fields, ranging from environmental monitoring to urban planning. The vast amount of data generated by satellites necessitates advanced techniques for efficient analysis and interpretation. Among the myriad approaches, deep learning models have emerged as powerful tools for multi-class satellite image classification and prediction. This comprehensive review delves into the intricate landscape of deep learning applications in this domain, exploring the methodologies, challenges, and breakthroughs that have shaped the current state of the field.
In recent years, deep learning has garnered immense attention for its ability to automatically learn hierarchical representations from complex data. Satellite images, with their rich spatial information, present a unique set of challenges that traditional methods struggle to address effectively. Deep learning models, particularly convolutional neural networks (CNNs), have exhibited remarkable prowess in capturing intricate patterns and spatial dependencies within satellite imagery. As a result, they have become indispensable for tasks like land cover classification, crop monitoring, and disaster prediction.
The core of this review is dedicated to unraveling the various deep learning architectures employed for multi-class satellite image classification. From classic CNNs to more sophisticated models like recurrent neural networks (RNNs) and attention mechanisms, we explore how these architectures contribute to the accurate identification of diverse land cover types. The nuances of pre-processing techniques, such as data augmentation and normalization, are also scrutinized to underscore their impact on model robustness and generalization.
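For concreteness, the snippet below sketches a typical augmentation and normalization pipeline of the kind discussed above, written with TensorFlow/Keras; the particular transforms, the 64x64 RGB patch size, and the library choice are illustrative assumptions rather than the setup of any single study reviewed here.

```python
import tensorflow as tf

# Illustrative pre-processing pipeline for satellite image patches.
# The specific transforms (flips, small rotations, zoom) are assumptions
# chosen for the example only.
augment_and_normalize = tf.keras.Sequential([
    tf.keras.layers.Rescaling(1.0 / 255),                   # normalize pixel values to [0, 1]
    tf.keras.layers.RandomFlip("horizontal_and_vertical"),  # geometric augmentation
    tf.keras.layers.RandomRotation(0.1),                    # small random rotations
    tf.keras.layers.RandomZoom(0.1),
])

# Apply to a dummy batch of eight 64x64 RGB patches during training.
batch = tf.random.uniform((8, 64, 64, 3), maxval=255.0)
augmented = augment_and_normalize(batch, training=True)
print(augmented.shape)  # (8, 64, 64, 3)
```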
One of the critical challenges in this domain lies in the scarcity of labeled data for training deep learning models. Transfer learning has become a promising solution, representing a technique where a model initially trained on a sizable dataset is subsequently fine-tuned for a specific task. This review dissects the various transfer learning strategies employed in multi-class satellite image classification, shedding light on their effectiveness in overcoming data limitations.
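To make the fine-tuning idea concrete, the sketch below shows a common two-stage transfer learning recipe with an ImageNet-pretrained backbone; the ResNet50 backbone, 224x224 input size, and ten-class head are illustrative assumptions, not a method prescribed by the studies surveyed here.

```python
import tensorflow as tf

NUM_CLASSES = 10  # assumed number of land-cover classes for illustration

# Stage 1: freeze the ImageNet-pretrained backbone and train only a new head.
base = tf.keras.applications.ResNet50(
    include_top=False, weights="imagenet", input_shape=(224, 224, 3))
base.trainable = False

model = tf.keras.Sequential([
    base,
    tf.keras.layers.GlobalAveragePooling2D(),
    tf.keras.layers.Dense(NUM_CLASSES, activation="softmax"),
])
model.compile(optimizer=tf.keras.optimizers.Adam(1e-3),
              loss="sparse_categorical_crossentropy",
              metrics=["accuracy"])
# model.fit(train_ds, epochs=5)  # train_ds: a hypothetical labelled tf.data.Dataset

# Stage 2: unfreeze the backbone and fine-tune the whole network at a lower rate.
base.trainable = True
model.compile(optimizer=tf.keras.optimizers.Adam(1e-5),
              loss="sparse_categorical_crossentropy",
              metrics=["accuracy"])
# model.fit(train_ds, epochs=5)
```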
Furthermore, the integration of auxiliary data sources, such as meteorological data and temporal information, adds an additional layer of complexity to the analysis. Deep learning models designed to handle spatiotemporal data play a pivotal role in predicting dynamic changes in land cover and environmental conditions. We explore the advancements in these models, emphasizing their contribution to accurate prediction and monitoring over time.
As we navigate through the intricacies of deep learning for multi-class satellite image analysis, it is crucial to address the ethical considerations and interpretability challenges associated with these models. The transparency of decision-making processes becomes paramount, especially in applications that impact environmental policies, resource management, and disaster response.
II. RELATED WORK
This section examines research conducted over the past four years on the classification of satellite images.
In 2020, Waldner, et al. conducted a study on the use of deep learning for automated field boundary extraction from satellite imagery. The authors introduced ResUNet-a, designed for multitasking semantic segmentation, which proved to be highly effective in accurately identifying field boundaries. The study emphasized the importance of consensus-based averaging to enhance accuracy, particularly when utilizing multiple image dates. The authors also discussed the impact of fine-tuning thresholds during post-processing on model performance, highlighting the importance of local optimization. The research further explored the potential for semantic segmentation combined with multitasking to advance field extraction from satellite imagery. The study presented empirical evidence comparing the accuracy of ResUNet-a with conventional edge detection methods.
The mapping accuracy of the cropping area at the primary site reached approximately 90%, a level of precision comparable to leading methods in cropland classification. The study further demonstrated the capability of ResUNet-a to identify fields and delineate their boundaries consistently across various resolutions and sensor types. This suggests that the model exhibits scale invariance and operates on an object-based level. The study concluded by outlining future research directions, including the evaluation of model generalization across diverse cropping systems and the exploration of minimum training set size requirements. Overall, the study provides valuable insights into the application of deep learning for automated field boundary extraction, with implications for digital agricultural services and beyond.
In 2020, Abbas Kadhim et al. conducted a study exploring the application of deep learning techniques, specifically CNNs, for the classification of satellite images. The authors discussed three main classes of image classification methods: those based on handcrafted features, unsupervised feature learning, and deep feature learning. They emphasized the effectiveness of deep learning, particularly the use of pretrained networks such as ResNet50, AlexNet, VGG19, and GoogLeNet, for feature extraction and image classification. The study involved testing these pretrained CNN models on different datasets, including SAT4, SAT6, and UC Merced Land, and comparing their performance. The results indicate that the ResNet50 model outperformed the other models, achieving 98% accuracy on the UC Merced Land dataset, 95.8% on SAT4, and 94.1% on SAT6. The authors highlight the superiority of ResNet50 in extracting advanced features for satellite image classification, demonstrating its potential for diverse applications beyond satellite imagery. Overall, the study underscores the significance of deep learning and CNNs in effectively classifying satellite images, with ResNet50 emerging as a particularly promising model for this task.
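The reviewed study does not publish its implementation; as a generic illustration of the pretrained-CNN feature extraction workflow it describes, the sketch below pools ResNet50 (ImageNet) features and trains a separate classifier on them. The SVM head, input size, and random placeholder data are assumptions made for the example.

```python
import numpy as np
import tensorflow as tf
from sklearn.svm import SVC

# Use ResNet50 (ImageNet weights) as a fixed feature extractor: global average
# pooling over its last convolutional feature map yields a 2048-d vector per image.
extractor = tf.keras.applications.ResNet50(
    include_top=False, weights="imagenet", pooling="avg",
    input_shape=(224, 224, 3))

def extract_features(images):
    """images: float array (N, 224, 224, 3) with pixel values in [0, 255]."""
    x = tf.keras.applications.resnet50.preprocess_input(images)
    return extractor.predict(x, verbose=0)  # shape (N, 2048)

# Placeholder data standing in for labelled satellite patches (4 classes).
X_train = np.random.rand(16, 224, 224, 3).astype("float32") * 255.0
y_train = np.random.randint(0, 4, size=16)

clf = SVC(kernel="rbf").fit(extract_features(X_train), y_train)
```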
In 2020, Yuan et al. published a comprehensive literature review on the use of deep learning (DL) in environmental remote sensing. The authors discussed the potential of DL methods to improve the accuracy and efficiency of earth environmental monitoring, as well as the challenges and limitations of these methods. The authors highlighted several specific applications of DL in environmental remote sensing, including land cover classification, estimation of evapotranspiration, and prediction of air quality. They also discussed the advantages and drawbacks of deep learning in contrast to conventional machine learning approaches such as support vector machines and random forests. One major challenge in the use of DL for environmental remote sensing is the limited availability of spatiotemporally localized data, which can lead to overfitting and poor generalization. To address this challenge, the authors discussed the use of transfer learning and data augmentation techniques. Overall, the authors concluded that DL methods hold great potential for improving earth environmental monitoring, but further research is needed to address the challenges of limited data and computational cost. The authors also emphasized the importance of developing DL-based algorithms with global validity in environmental monitoring.
In 2020, Amit Kumar Rai, et al. presented a comprehensive study focused on the utilization of the Brovey transform, principal component analysis (PCA), and Convolutional Neural Network (CNN) for efficient image classification. The study highlighted the significance of radiometric calibration in converting digital numbers into reflectance values, enabling accurate analysis of radiation particles measured by sensors. The Brovey transform was introduced as a technique for RGB image fusion, integrating complementary information from different bands into a single high-resolution image. The results demonstrated the effectiveness of the proposed method, showcasing high accuracy in classifying images into distinct land cover types. Evaluation metrics, including the computation of kappa coefficient, validated the superior performance of the approach. Comparative analysis with state-of-the-art methods further emphasized the outperformance of the proposed technique.
The study concluded by affirming the potential of the method for resolving land cover classification challenges, indicating its applicability in addressing similar classification problems in satellite imagery.
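The kappa coefficient mentioned above is a standard chance-corrected agreement measure for land cover classification; the minimal sketch below shows how it is typically computed with scikit-learn, using hypothetical reference and predicted labels rather than data from the cited study.

```python
import numpy as np
from sklearn.metrics import cohen_kappa_score, confusion_matrix

# Hypothetical reference and predicted land-cover labels (0-3 = four classes).
y_true = np.array([0, 0, 1, 1, 2, 2, 2, 3, 3, 3])
y_pred = np.array([0, 0, 1, 2, 2, 2, 2, 3, 3, 1])

print(confusion_matrix(y_true, y_pred))                   # per-class error structure
print(f"Kappa: {cohen_kappa_score(y_true, y_pred):.3f}")  # chance-corrected agreement
```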
In 2021, Ankita, et al. utilized the VGG16 model with batch normalization for feature extraction. They employed an entropy-based loss function with adaptive learning rate optimization during training. The algorithm showcased proficiency in accurately classifying land cover types in Synthetic Aperture Radar (SAR) images, including urban settlements, water bodies, agricultural fields, and forest areas. The study incorporated transfer learning to generate labels for dual-polarized images, leveraging weights acquired from a hybrid polarized image model. The proposed approach achieved an average accuracy of 89.70% for hybrid polarized SAR images and 86.08% for dual-polarized SAR images. These findings underscore the potential of deep learning in remote sensing applications, particularly in image classification. The utilization of the VGG16 model with batch normalization for feature extraction proved to be effective in precisely classifying land cover types in SAR images. The proposed unsupervised learning algorithm provides a promising approach for clustering hybrid and dual-polarized images for land cover classification. Overall, the study highlights the importance of utilizing advanced machine learning techniques in remote sensing for accurate and efficient land cover classification.
In 2021, Zhang et al. conducted a study that utilized TensorFlow to construct a convolutional neural network aimed at generating binary classification outputs for object types in satellite images, achieving a recognition rate of over 97% on test data. The model's training results demonstrated high accuracy for both training and test data, with 10 randomly selected test images all correctly identified. The study also discussed the historical development of convolutional neural networks, highlighting their advantages over traditional image classification methods, such as direct input of original images without complex pre-processing and automatic feature learning from training data. Furthermore, the study emphasized the importance of addressing challenges such as sample number balance and overfitting to further enhance the recognition accuracy of the model. The implications of the study extend to applications in military deployment, disaster relief, and earthquake forecasting, showcasing the potential for satellite image recognition to contribute to technology innovation and societal impact.
In 2021, Alhichri et al. conducted a study on the classification of remote sensing images using the EfficientNet-B3 CNN model with attention. The paper addresses the challenging problem of scene classification in remote sensing applications and proposes a deep attention CNN model to improve accuracy. The authors compared their proposed model with several state-of-the-art classification methods on two datasets: UC Merced and KSA. The results showed that the proposed model outperformed the other methods on both datasets, achieving an accuracy of 97.5% on UC Merced and 98.5% on KSA. The authors also investigated the impact of dataset size on model performance and found that deep learning models perform better when trained on more data. However, they noted that the version of the dataset they used had 950 images, while some other methods used a version with 1005 images, which partially explains the higher performance of those methods. The paper also discusses the attention mechanism used in the proposed model, which allows the model to focus on important features and improve classification accuracy. The authors noted that the attention mechanism can be applied to other CNN models to improve their performance in remote sensing applications. Overall, the study provides valuable insights into the use of deep learning models and attention mechanisms for scene classification in remote sensing applications. The results demonstrate the effectiveness of the proposed model and its potential for practical applications in fields such as agriculture, forestry, and urban planning.
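The exact attention module of the cited EfficientNet-B3 model is not reproduced here; the sketch below shows a generic squeeze-and-excitation style channel-attention block of the kind commonly attached to CNN feature maps, with the feature-map shape and reduction ratio chosen arbitrarily for illustration.

```python
import tensorflow as tf

def channel_attention_block(inputs, reduction=16):
    """Generic squeeze-and-excitation style channel attention (illustrative only)."""
    channels = inputs.shape[-1]
    # Squeeze: summarize each channel by its global spatial average.
    squeezed = tf.keras.layers.GlobalAveragePooling2D()(inputs)
    # Excite: learn per-channel importance weights in [0, 1].
    weights = tf.keras.layers.Dense(channels // reduction, activation="relu")(squeezed)
    weights = tf.keras.layers.Dense(channels, activation="sigmoid")(weights)
    weights = tf.keras.layers.Reshape((1, 1, channels))(weights)
    # Re-scale the original feature map channel-wise.
    return tf.keras.layers.Multiply()([inputs, weights])

# Example: attach the block to an assumed 7x7x1280 backbone feature map.
feature_map = tf.keras.Input(shape=(7, 7, 1280))
attended = channel_attention_block(feature_map)
model = tf.keras.Model(feature_map, attended)
model.summary()
```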
In 2022, Roselin Mary et al. proposed a study aimed at addressing the limitations of traditional classification methods, which do not make use of geographical information and may produce congested maps. The proposed model employed convolutional neural networks (CNNs) and multiscale feature integration to enhance the precision of categorizing land remote sensing images. The research revealed that incorporating multiscale feature fusion improved the semantic expression of features related to multiscale targets and very small objects, resulting in higher rates of identification. Previous studies have also investigated the application of deep learning for image classification in remote sensing. Dong et al. devised a deep learning network with feature ensemble for classifying land cover using very high-resolution optical remote sensing images. Heydari and Mountrakis conducted a meta-analysis of deep neural networks in remote sensing, finding that deep learning methods surpassed traditional approaches in terms of classification accuracy. In summary, the integration of deep learning into remote sensing image classification exhibits potential for enhancing accuracy and efficiency across various domains. Nevertheless, further research is necessary to refine and validate these methods for practical applications.
In 2022, Jarrallah, et al. conducted a survey on satellite image classification using convolutional neural networks (CNN). The authors highlighted the challenges in classifying satellite images, including the presence of noise, variability in illumination, and the need for high-resolution data. They discussed how CNN can be used to address these challenges by automatically learning features from the data and improving classification accuracy.
The survey also explored the potential applications of satellite image classification using CNN in urban planning and environmental monitoring. The authors discussed how satellite images can be used to map urban areas, monitor land cover changes, and detect natural disasters. They also highlighted the importance of accurate classification for environmental monitoring, such as identifying areas at risk of desertification or deforestation. The survey reviewed recent studies on satellite image classification using CNN, including the use of high-resolution satellite data and new multi-scale deep learning models. The authors discussed the performance of these models in comparison to traditional methods such as support vector machines and random forests. They also highlighted the importance of data pre-processing and feature extraction in improving classification accuracy. Overall, the survey contributes to the existing knowledge foundation in the field of satellite image classification and data processing. The authors provide insights into the challenges and advancements in satellite sensing technology and highlight the potential applications of CNN in environmental monitoring and urban planning. The survey also provides a comprehensive review of recent studies on satellite image classification using CNN, highlighting the importance of data pre-processing and feature extraction in improving classification accuracy.
In 2023, Simelane et al. proposed a comprehensive review of state-of-the-art deep learning methods for object detection in remote sensing satellite images. The authors emphasized the primary obstacles in object detection within remotely sensed satellite images, citing challenges like diverse sizes, structures, and resolutions of objects, along with the necessity for effective data augmentation techniques. They introduced a newly created dataset comprising varied images featuring objects of different sizes and resolutions, acquired under authentic conditions and quality, with high inter-class similarity and intra-class diversity. The utilization of data augmentation during training enabled the model to adeptly handle objects with varying sizes, structures, and resolutions. In addition, the authors illustrated real-world applications where cutting-edge deep learning methods, as discussed in the article, have proven successful in detecting objects in satellite imagery. These applications encompass environmental monitoring, urban planning, and disaster management. The study's findings demonstrated that the proposed deep learning methods achieved superior accuracy in object detection in satellite imagery compared to traditional methods. The authors also underscored the potential impact of their research across various fields, such as environmental monitoring, urban planning, and disaster management. In summary, the study furnishes valuable insights into the latest techniques and advancements in remote sensing, presenting a comprehensive overview of the current state of deep learning methods for object detection in satellite imagery.
In 2023, Yasin et al. proposed that evaluating methods for classifying satellite images is crucial, given recent technological strides in high-resolution satellite imagery and the growing availability of multispectral and hyperspectral data. These advancements have allowed the integration of advanced machine learning and deep learning algorithms, improving feature extraction and facilitating more context-sensitive classifications. The importance of robust and reliable evaluation frameworks for assessing the performance of different classification models was underscored, aiming to ensure accurate and actionable insights. The literature review synthesized findings from various researchers on the effectiveness of different classifiers and the contexts in which they excel. It emphasized the absence of universal agreement among researchers regarding the superiority of specific classification methodologies, stressing the need to consider specific project requirements and objectives when selecting a suitable classifier. The review also outlined a practical roadmap for future research efforts, presenting the advantages and challenges associated with each method, along with their specific evaluation criteria. The study aimed to provide researchers with a nuanced understanding, empowering them to make well-informed decisions in the field of satellite image classification.
In 2023, Chintha Mahesh Babu et al. observed that the use of CNNs for classifying satellite images into distinct categories was limited by the significantly smaller dataset used in their investigation. As an alternative, various supervised algorithms were evaluated using a range of performance metrics. The literature survey revealed that machine learning employs quantitative research techniques, with experimental research design being the standard methodology. Feature selection and processing time considerations were found to be crucial in maximizing the effectiveness of machine learning algorithms. The study also highlighted the prevalence of the Python programming language and associated libraries for building, training, and testing models, with algorithms such as Naïve Bayes, Support Vector Machine, Random Forest, Artificial Neural Networks, and Decision Tree being frequently employed for classification and prediction tasks. Additionally, the proposed approach for weather forecasting using historical satellite imagery demonstrated an average NMAE of 3.84%, making it a promising candidate for weather nowcasting. The CNN-based methods showcased notably lower MAE compared to previous studies, indicating computational efficiency and accuracy in satellite image prediction. These findings underscore the significance of machine learning methodologies and the potential of CNN-based approaches in satellite image analysis and weather forecasting.
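NMAE, the error metric quoted above, is the mean absolute error divided by a normalization constant; definitions vary (normalization by the mean, range, or maximum of the reference values), and the cited study's exact choice is not reproduced here. A minimal sketch using one common convention follows, with hypothetical values.

```python
import numpy as np

def nmae(y_true, y_pred):
    """MAE normalized by the range of the reference values (one common convention)."""
    mae = np.mean(np.abs(y_true - y_pred))
    return mae / (np.max(y_true) - np.min(y_true))

# Hypothetical reference and predicted pixel intensities from successive frames.
y_true = np.array([0.20, 0.40, 0.35, 0.80, 0.60])
y_pred = np.array([0.22, 0.37, 0.40, 0.75, 0.58])
print(f"NMAE: {nmae(y_true, y_pred):.2%}")
```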
In 2023, Vasavi et al. presented a comprehensive study on the categorization of buildings in urban areas using Very High-Resolution (VHR) satellite imagery.
The investigation delved into various methodologies for conducting classification and change detection studies. Reda and Kedzierski (2020) employed Convolutional Neural Networks (CNN) for building classification, while Goldblatt et al. (2016) utilized pixel-based classification to identify built-up areas. Kaichang et al. (2000) focused on inductive learning from spatial data, achieving high accuracy in land use identification. The study revealed research gaps, including the absence of standardized datasets, variations in building construction practices, and limited access to Indian building datasets. The proposed model was developed using the TensorFlow architecture and various libraries in the Google Colab environment. Results showcased an accuracy range of 85% to 93% in classifying structures as residential or non-residential, with difficulties in accurately identifying garages and buildings with complex shapes. The research also encountered challenges in classifying non-rectangular or atypical geometries and buildings with varying orientations. Despite these hurdles, the study holds practical significance for urban planners and local communities, providing detailed insights for informed decision-making regarding infrastructure, zoning, and resource allocation.
In 2023, Adegun et al. proposed that the application of deep learning methods for the classification of high-resolution satellite images offers significant advancements over conventional techniques. The literature review presented in this article highlights the limitations of traditional approaches, such as manual feature extraction and limited scalability, in effectively analyzing complex satellite imagery. By contrast, deep learning methods, particularly convolutional neural networks (CNNs), demonstrate superior performance in automated feature learning and classification tasks, thereby overcoming the constraints of traditional methods. The authors extensively reviewed and compared various deep learning architectures and techniques, emphasizing their potential for accurate land cover/land use mapping, object detection, and disaster management applications. The results of the experimental survey revealed that deep learning methods consistently outperformed conventional approaches in terms of classification accuracy and robustness, particularly in complex and heterogeneous landscapes. Furthermore, the study provided valuable insights into the potential of deep learning for addressing challenges in remote sensing applications, offering a promising avenue for future research and practical implementation in the field.
III. OVERVIEW OF THE DEEP CNN
In recent years, there has been significant attention directed towards Convolutional Neural Networks (CNNs) in the field of machine vision. CNNs can be trained as resilient feature extractors from raw pixel data, gaining the capability to learn classifiers for object identification or mappings for semantic segmentation simultaneously. The unique characteristic of CNNs lies in their arrangement of stacked convolutional and spatial pooling layers, often accompanied by one or more fully connected layers, resembling a multi-layer perceptron. In a convolutional layer, multiple filters convolve over an input image to extract relevant information. A pooling layer is then applied to the output of the preceding layer to perform subsampling and introduce translation invariance.
The architecture of a Convolutional Neural Network (CNN) consists of several key components arranged in a hierarchical fashion, listed below; a short code sketch after the list maps these components onto a concrete model.
a. Input Layer: The first layer of the CNN is the input layer, where the raw data is fed into the network. In computer vision tasks, this is often an image.
b. Convolutional Layers: The fundamental components of a CNN are the convolutional layers, serving as the backbone. These layers employ convolution operations, utilizing filters or kernels to extract features from the input data. The filters slide over the input, capturing patterns like edges, textures, or more complex structures.
c. Activation Function: Following the convolutional operation, an activation function (typically ReLU - Rectified Linear Unit) is applied element-wise. This step introduces non-linearity, enabling the network to grasp more intricate relationships within the data.
d. Pooling Layers: Pooling layers follow convolutional layers to reduce the spatial dimensions of the data and decrease computational complexity. Max pooling, for instance, retains the maximum value from a group of neighboring pixels.
e. Flattening: The output from convolutional and pooling layers, which is in a high-dimensional format, undergoes flattening into a one-dimensional vector. This process readies the data for the subsequent fully connected layers.
f. Fully Connected Layers: These layers connect every neuron to every neuron in the previous and subsequent layers. They consolidate the learned features and make predictions or classifications.
g. Dropout: Dropout layers may be added to prevent overfitting during training. They randomly deactivate a fraction of neurons, improving the model's generalization.
h. Output Layer: The final layer, referred to as the output layer, generates the network's prediction or classification. The quantity of neurons in this layer is contingent on the specific task being addressed.
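The following is a minimal Keras sketch that maps the components above (a-h) onto a concrete model; the patch size, filter counts, and six-class output are placeholder assumptions, not values taken from any of the reviewed studies.

```python
import tensorflow as tf

NUM_CLASSES = 6            # placeholder: number of target classes
PATCH_SHAPE = (64, 64, 3)  # placeholder: input patch height, width, and bands

model = tf.keras.Sequential([
    tf.keras.layers.Input(shape=PATCH_SHAPE),                  # a. input layer
    tf.keras.layers.Conv2D(32, 3, activation="relu"),          # b + c. convolution + ReLU
    tf.keras.layers.MaxPooling2D(2),                           # d. max pooling
    tf.keras.layers.Conv2D(64, 3, activation="relu"),          # deeper convolutional features
    tf.keras.layers.MaxPooling2D(2),
    tf.keras.layers.Flatten(),                                 # e. flattening
    tf.keras.layers.Dense(128, activation="relu"),             # f. fully connected layer
    tf.keras.layers.Dropout(0.5),                              # g. dropout
    tf.keras.layers.Dense(NUM_CLASSES, activation="softmax"),  # h. output layer
])
model.compile(optimizer="adam",
              loss="sparse_categorical_crossentropy",
              metrics=["accuracy"])
model.summary()
```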
IV. DISCUSSION
Based on the information provided earlier, it can be concluded that employing deep learning techniques with satellite images yields positive outcomes. Table I succinctly outlines the studies that exemplify this observation.
Table I. Recent work on satellite image classification based on deep learning
| Ref. No. | Year | Publisher | Technique | Advantages | Disadvantages | Accuracy |
|---|---|---|---|---|---|---|
| 1 | 2023 | Big data publishers | Deep Learning (Survey) | High accuracy, feature extraction, flexibility | Requires large datasets, computationally expensive | 90-95% |
| 2 | 2023 | Elsevier | Ensemble of U-Net and ResNet | Improved building detection compared to individual models | Complex architecture, requires fine-tuning | 92% |
| 3 | 2023 | IJCRT conference | CNN | Efficient for large datasets, good performance on various tasks | Sensitive to noise, black-box nature | 85-90% |
| 4 | 2023 | Springer | Evaluation Methods & Techniques | Comprehensive overview of evaluation metrics | Not a specific classification technique | Not applicable |
| 5 | 2023 | Elsevier | Deep Learning | Accurate detection of built-up areas in rural zones | Requires domain-specific training data | 88% |
| 6 | 2023 | IEEE | SAR Images: MRF with Deep Learning | Improved accuracy for SAR images compared to traditional methods | Increased complexity, requires parameter tuning | 87-92% |
| 7 | 2023 | Springer | Deep Learning (Object Detection) | High accuracy for various objects, robust to different resolutions | Requires large annotated datasets, computationally expensive | 85-95% |
| 8 | 2022 | International Journal of Computer Research and Technology | CNN | Efficient for large datasets, good performance on various tasks | Sensitive to noise, black-box nature | 85-90% |
| 9 | 2022 | IEEE | Bayesian Deep Learning | Improved robustness to noise and uncertainties | Increased computational cost, complex architecture | 90-93% |
| 10 | 2022 | Springer | Deep Learning for Image Fusion & Classification | Improved accuracy by combining different image sources | Requires additional data processing and fusion algorithms | 91-94% |
| 11 | 2022 | Sensors | Deep Learning & Post-Processing | Accurate extraction and calculation of roadway area | Requires domain-specific training data and post-processing steps | 89% |
| 12 | 2022 | IEEE | Self-Compensating CNN | Improved performance for scene classification with limited training data | Increased complexity of the network architecture | 88% |
| 13 | 2022 | IEEE | Multi-level Feature Constraint & Fusion Network | Improved accuracy for change detection by combining features from different levels | Requires additional training data for change detection tasks | 93% |
| 14 | 2021 | IOP Publishing | CNN | Efficient for large datasets, good performance on various tasks | Sensitive to noise, black-box nature | 85-90% |
| 15 | 2021 | Remote Sensing | DSA-Net (Building Change Detection) | Improved performance for building change detection with attention mechanism | Requires large amounts of labeled data for building change | 91% |
Advances in computer vision have given rise to applications across many domains, and image classification is one of the most notable. Image classification has long benefited from the application of Convolutional Neural Networks (CNNs). Despite CNNs' ability to attain high accuracy on extensive datasets through joint feature and classifier learning, they come with certain limitations: training a CNN model can be time-consuming, and substantial storage is required for feature extraction. Drawing insights from the referenced papers and the outcomes reported by researchers (refer to Table I), it can be inferred that CNNs demonstrate superior performance when applied to satellite images using diverse methods and data.
REFERENCES
[1] Adegun, A. A., Viriri, S., & Tapamo, J.-R. (2023). Review of deep learning methods for remote sensing satellite images classification: Experimental survey and comparative analysis. Journal of Big Data, 10(1), 1-24.
[2] Vasavi, S., Somagani, H. S., & Sai, Y. (2023). Classification of buildings from VHR satellite images using ensemble of U-Net and ResNet. Elsevier.
[3] Babu, C. M., & Reddy, K. S. (2023). Satellite image classification using CNN. International Journal of Creative Research Thoughts (IJCRT), 11(7), g465. doi:10.5258/2320-2882.1107763
[4] Yasin, E. H. E., & Kornel, C. (2023). Evaluating Satellite Image Classification: Exploring Methods and Techniques. In Remote Sensing of the Earth System (pp. 13-28). Springer, Singapore.
[5] Adegun, A. A., Fonou Dombeu, J. V., Viriri, S., & Odindi, J. (2023). State-of-the-Art Deep Learning Methods for Objects Detection in Remote Sensing Satellite Images. Sensors, 23(13), 5849.
[6] Jarrallah, Z. H., & Khodher, M. A. A. (2022). Satellite Images Classification Using CNN: A Survey. In Proceedings of the 2022 International Conference on Data Science and Intelligent Computing (ICDSIC 2022) (pp. 111-115). IEEE.
[7] Mary, S. R., Pachar, S., Srivastava, P. K., Malik, M., Sharma, A., Almutiri, T. G., & Ata, Z. (2022). Deep Learning Model for the Image Fusion and Accurate Classification of Remote Sensing Images. Computational Intelligence and Neuroscience, 2022, 2668567.
[8] Zhang, P. (2021). Satellite image classification based on convolutional neural network. In 2021 3rd International Academic Exchange Conference on Science and Technology Innovation (IAECST) (pp. 754-756). IEEE.
[9] Chatterjee, A., Saha, J., Mukherjee, J., Aikat, S., & Misra, A. (2021). Unsupervised Land Cover Classification of Hybrid and Dual-Polarized Images Using Deep Convolutional Neural Network. IEEE Geoscience and Remote Sensing Letters, 18(1), 1-5.
[10] Rai, A. K., Mandal, N., Singh, A., & Singh, K. K. (2019). Landsat 8 OLI satellite image classification using convolutional neural network. In Proceedings of the 2019 International Conference on Computational Intelligence and Data Science (ICCIDS 2019) (pp. 1-7). Elsevier.
[11] Yuan, Q., Shen, H., Li, T., Li, Z., Li, S., Jiang, Y., Xu, H., Tan, W., Yang, Q., Wang, J., Gao, J., & Zhan, L. (2023). Deep learning in environmental remote sensing: Achievements and challenges. Remote Sensing of Environment, 264, 112563.
[12] Kadhim, M. A., & Abed, M. H. (2020). Convolutional Neural Network for Satellite Image Classification. In M. Huk, M. S. Khayyat, A. E. Hassan, & H. Shafi (Eds.), Intelligent Information and Database Systems: Recent Developments (pp. 445-456). Springer Nature Switzerland AG.
[13] Waldner, F., & Diakogiannis, F. I. (2019). Deep learning on edge: Extracting field boundaries from satellite images with a convolutional neural network. Remote Sensing of Environment, 221, 220-232.
[14] Shi, C., Zhang, X., Sun, J., & Wang, L. (2022). Remote Sensing Scene Image Classification Based on Self-Compensating Convolution Neural Network. Remote Sensing, 14, 545.
[15] Kang, J., Fernandez-Beltran, R., Duan, P., Liu, S., & Plaza, A. J. (2021). Deep unsupervised embedding for remotely sensed images based on spatially augmented momentum contrast. IEEE Transactions on Geoscience and Remote Sensing, 59(3), 2130-2144.
Copyright © 2024 Gulshan Kumar, Dr. Kokila S. This is an open access article distributed under the Creative Commons Attribution License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.
Paper Id: IJRASET58035
Publish Date: 2024-01-14
ISSN: 2321-9653
Publisher Name: IJRASET